Assist for orienting a camera at different zoom levels
Patent abstract:
Aspects of the present disclosure relate to systems and methods for assisting in positioning a camera at different zoom levels. An example device can include a memory configured to store image data. The example device may further include a processor in communication with the memory, the processor being configured to: process a first image stream associated with a scene; independently process a second image stream associated with a spatial portion of the scene, in which the second image stream is different from the first image stream; output the first processed image stream; and output, during output of the first processed image stream, a visual indication indicating the spatial portion associated with the second image stream.

Publication number: BR112020004680A2
Application number: R112020004680-9
Filing date: 2018-09-10
Publication date: 2020-09-15
Inventors: Cullum Baldwin; Pasi Parkkila
Applicant: Qualcomm Incorporated
IPC main classification:
Patent description:
[001] This patent application claims priority to Non-Provisional Application No. 15/701,293, entitled "ASSIST FOR ORIENTING A CAMERA AT DIFFERENT ZOOM LEVELS", filed on September 11, 2017, assigned to the assignee hereof, and expressly incorporated herein by reference.

TECHNICAL FIELD

[002] The present disclosure relates generally to systems for image capture devices, and specifically to positioning a camera for recording images or video.

BACKGROUND OF RELATED ART

[003] Many devices and systems (such as smartphones, tablets, digital cameras, security systems, computers, and so on) include cameras for various applications. For many camera systems, when capturing images or recording video, a viewfinder for the camera (such as a smartphone or tablet display) previews the images or video being captured by the camera. For example, if a camera zooms in while capturing images or recording video, the viewfinder shows the zoomed-in view (which is what is being recorded using the camera).

SUMMARY

[004] This summary is provided to introduce in a simplified form a selection of concepts that are further described below in the Detailed Description.

[005] Aspects of the present disclosure relate to assisting in positioning a camera at different zoom levels. In some implementations, an example device may include a memory configured to store image data. The example device may also include a processor in communication with the memory, the processor being configured to: process a first image stream associated with a scene; independently process a second image stream associated with a spatial portion of the scene, where the second image stream is different from the first image stream; output the first processed image stream; and output, during output of the first processed image stream, a visual indication indicating the spatial portion associated with the second image stream.

[006] In another example, a method is described. The example method includes processing, by a processor, a first image stream associated with a scene; independently processing, by the processor, a second image stream associated with a spatial portion of the scene, in which the second image stream is different from the first image stream; outputting the first processed image stream; and outputting, during output of the first processed image stream, a visual indication indicating the spatial portion associated with the second image stream.

[007] In another example, a non-transitory computer-readable medium is described. The non-transitory computer-readable medium can store instructions that, when executed by a processor, cause a device to perform operations including: processing a first image stream associated with a scene; independently processing a second image stream associated with a spatial portion of the scene, in which the second image stream is different from the first image stream; outputting the first processed image stream; and outputting, during output of the first processed image stream, a visual indication indicating the spatial portion associated with the second image stream.

[008] In another example, a device is described.
The device includes means for processing a first image stream associated with a scene; means for independently processing a second image stream associated with a spatial portion of the scene, wherein the second image stream is different from the first image stream; means for outputting the first processed image stream; and means for outputting, during output of the first processed image stream, a visual indication indicating the spatial portion associated with the second image stream.

BRIEF DESCRIPTION OF THE DRAWINGS

[009] Aspects of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numbers refer to similar elements.

[0010] Figure 1A illustrates an example device that includes multiple cameras.

[0011] Figure 1B illustrates another example device that includes multiple cameras.

[0012] Figure 2A is a block diagram of an example image processing device.

[0013] Figure 2B is another block diagram of an example image processing device.

[0014] Figure 3 is a block diagram of an example image signal processor.

[0015] Figure 4A illustrates an example scene being captured by a camera.

[0016] Figure 4B illustrates the example scene of Figure 4A, an intended portion of the scene of Figure 4A to be recorded, and an actual portion of the scene of Figure 4A being recorded.

[0017] Figure 5A illustrates an example preview of the scene of Figure 4A being captured and a visual indication of the portion of the scene of Figure 4A being or to be recorded.

[0018] Figure 5B illustrates an example of a first camera capturing the scene of Figure 4A and a second camera capturing the portion of the scene of Figure 4A being or to be recorded.

[0019] Figure 6 is an illustrative flowchart depicting an example operation for outputting a visual indication.

[0020] Figure 7 illustrates an example of a display switching between a preview of the scene of Figure 4A being captured and the portion of the scene of Figure 4A being or to be recorded.

[0021] Figure 8 is an illustrative flowchart depicting an example operation for switching between a scene being captured and a portion of the scene being recorded.

[0022] Figure 9A illustrates an example display with a concurrent preview of the scene of Figure 4A being captured and the portion of the scene of Figure 4A being or to be recorded.

[0023] Figure 9B illustrates an example of a display switching between a preview of the portion of the scene of Figure 4A being or to be recorded and a concurrent preview of the scene of Figure 4A being captured and the portion of the scene of Figure 4A being or to be recorded.

[0024] Figure 9C illustrates an example of a display switching between a zoom assist preview of the scene of Figure 4A, a preview of the portion of the scene of Figure 4A being or to be recorded, and a concurrent preview.

[0025] Figure 10 is an illustrative flowchart depicting an example operation for switching between a concurrent preview of a scene and a preview of the portion of the scene being or to be recorded.

[0026] Figure 11A illustrates an example preview of the scene of Figure 4A being captured and resizing of a visual indication of the portion of the scene of Figure 4A being or to be recorded.

[0027] Figure 11B illustrates an example preview of the scene of Figure 4A being captured and moving of a visual indication of the portion of the scene of Figure 4A being or to be recorded.
[0028] Figure 12 is an illustrative flowchart depicting an example operation for adjusting the portion of the scene of Figure 4A being or to be recorded based on a user adjusting a visual indication of the portion of the scene of Figure 4A being or to be recorded.

[0029] Figure 13A illustrates an example scene being captured by a first camera and an example portion of the scene captured or capable of being captured by a second camera.

[0030] Figure 13B illustrates the example scene of Figure 13A and the portion of the scene of Figure 13A being or to be recorded.

[0031] Figure 14 is an illustrative flowchart depicting an example operation for switching between a first image stream and a second image stream.

[0032] Figure 15 is an illustrative flowchart depicting an example operation for switching from a second processed image stream to a first processed image stream.

[0033] Figure 16 is an illustrative flowchart depicting another example operation for switching from a second processed image stream to a first processed image stream.

DETAILED DESCRIPTION

[0034] Aspects of the present disclosure can assist a user in positioning a camera for recording images or video, and may be applicable to devices having a variety of camera configurations.

[0035] Each camera includes a field of view for capturing a scene. Many people use a zoom (magnification) to capture only a portion of the scene in the field of view. The zoom may be optical, in which the camera lens is moved to change the focal length of the camera (allowing an object in the scene to be magnified without loss of resolution). The zoom may alternatively be digital, in which portions of a captured scene are cropped. If portions of the captured scene are cropped, the remaining portion can be stretched across a viewfinder (causing some loss of resolution).

[0036] In a zoomed-in view, camera movements (such as panning or tilting the camera, or a user moving the phone or tablet while recording) are amplified (since objects in the view are enlarged or stretched), which may result in the previewed images or video (and therefore the recorded images or video) not including, or not being centered on, the object or scene the user intends to record. For example, if a device is recording images or video of a soccer game, and the user commands the camera to zoom in on a specific player or the ball, the player or ball may not remain centered (or even present) in the camera's field of view while zoomed in. A user may struggle to position the camera, for example, attempting to find the player or ball in the camera's zoomed-in viewfinder. It is desirable to assist the user in positioning the camera, for example, to reduce the user's difficulty in positioning the camera during a zoomed-in view.

[0037] In some implementations, at least a portion of a device (such as one or more processors) can process a first image stream (from a first camera) associated with a scene; independently process a second image stream (from a second camera) associated with a spatial portion of the scene, the second image stream being different from the first image stream; output the first processed image stream (such as for preview via a display); and output a visual indication of the spatial portion of the scene associated with the second image stream during output of the first processed image stream (such as in the preview of the first processed image stream).
In this way, aspects of the present disclosure can reduce a user's difficulty in positioning a camera at different zoom levels while capturing and/or recording images or video, and can also allow a device to more effectively capture and/or record intended portions of a scene. For example, as noted above, a user recording a soccer game can use the visual indication in a preview to identify which portion of the scene is being recorded, and thus position the camera as desired without searching within a zoomed-in preview.

[0038] The terms "capture" and "record" are used to differentiate between images from a camera sensor that are not yet processed or not fully processed (such as for a preview) and images that are fully processed (such as for storage for later viewing or for streaming to others). When a camera sensor is active, the camera can continuously capture images (such as at a predetermined number of frames per second). Many of these captures are discarded and are not used or fully processed (such as through the application of denoising, edge enhancement, color balance, and other filters by an image signal processor). "Recorded" images or video are captures that the user intends or requests to be fully processed by an image signal processor (such as by pressing a camera button for an image, pressing a video recording button, and so on). A camera preview shows captures without their being fully processed. If the camera preview shows captures being recorded, the preview shows partially processed captures while the image signal processor fully processes those captures. If the camera preview shows captures before an image or video is to be recorded, the preview shows partially processed captures that are discarded after the preview. Once captures are fully processed for recording, the resulting images or video can, for example, be stored in memory for later viewing, streamed to others for live viewing, and so on.

[0039] In the description below, numerous specific details are set forth, such as examples of specific components, circuits, and processes, to provide a thorough understanding of the present disclosure.

[0040] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as is apparent from the discussions below, it is appreciated that throughout this application, discussions using terms such as "accessing", "receiving", "sending", "using", "selecting", "determining", "normalizing", "multiplying", "averaging", "monitoring", "comparing", "applying", "updating", "measuring", "deriving", or the like refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage, transmission, or display devices.
[0041] In the figures, a single block may be described as performing a function or functions; however, in actual practice, the function or functions performed by that block may be performed in a single component or across multiple components, and/or may be performed using hardware, using software, or using a combination of hardware and software. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps are described below generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Also, the example devices may include components other than those shown, including well-known components such as a processor, memory, and the like.

[0042] Aspects of the present disclosure are applicable to any suitable processor (such as an image signal processor) or device (such as smartphones, tablets, laptop computers, digital cameras, network cameras, and so on) that includes one or more cameras, and can be implemented for a variety of camera configurations. While portions of the description below and the examples use two cameras for a device in order to describe aspects of the disclosure, the disclosure applies to any device with at least one camera. For devices with multiple cameras, the cameras may have similar capabilities or different capabilities (such as resolution, color or black and white, a wide-view lens versus a telephoto lens, zoom capabilities, and so on).

[0043] Figure 1A illustrates an example device 100 that includes a dual camera with a first camera 102 and a second camera 104 arranged in a first configuration. Figure 1B illustrates another example device 110 that includes a dual camera with a first camera 112 and a second camera 114 arranged in a second configuration. In some aspects, one of the cameras (such as the first cameras 102 and 112) can be a primary camera, and the other camera (such as the second cameras 104 and 114) can be an auxiliary camera. Additionally or alternatively, the second cameras 104 and 114 may have a different focal length, capture rate, resolution, color palette (such as color versus black and white), and/or field of view or capture than the first cameras 102 and 112. In some aspects, the first cameras 102 and 112 may each include a wide field of view, and the second cameras 104 and 114 may each include a narrower field of view (such as for a telephoto camera). In other aspects, the first cameras 102 and 112 may have the same capabilities as the second cameras 104 and 114 but provide a different view of the scene being or to be captured or recorded.

[0044] Although an example device that can be a smartphone or tablet including a dual camera is presented, aspects of the present embodiments can be implemented in any device with an image signal processor. Another example device is a wearable (such as a smartwatch) that can connect to a smartphone or tablet. Another example device is an augmented or virtual reality headset coupled (for example, in communication with or physically connected) to cameras. Another example device is a drone or automobile coupled to or including cameras.
Another example device is a video security system coupled to security cameras. These examples are for illustrative purposes only, and the disclosure should not be limited to any specific example or device.

[0045] The term "device" is not limited to one or a specific number of physical objects (such as one smartphone). As used herein, a device can be any electronic device with multiple parts that can implement at least some portions of this disclosure. While the description and examples below use the term "device" to describe various aspects of this disclosure, the term "device" is not limited to a specific configuration, type, or number of objects.

[0046] Figure 2A is a block diagram of an example device 200 that includes multiple cameras. The example device 200, which can be an implementation of the devices 100 and 110 of Figures 1A and 1B, can be any suitable device capable of capturing images or video, including, for example, wired and wireless communication devices (such as camera phones, smartphones, tablets, security systems, dash cameras, laptop computers, desktop computers, wearable cameras, drones, and so on), digital cameras (including still cameras, video cameras, and so on), or any other suitable device. The example device 200 is shown in Figure 2A to include a first camera 202, a second camera 204, a processor 206, a memory 208 storing instructions 210, a camera controller 212, a display 216, and a number of input/output (I/O) components 218.

[0047] The first camera 202 and the second camera 204 may be capable of capturing individual image frames (such as still images) and/or capturing video (such as a succession of captured image frames). A succession of still images or video from a camera may be called an image stream. The first camera 202 and the second camera 204 may include one or more image sensors (not shown for simplicity) and shutters for capturing an image stream and providing the captured image stream to the camera controller 212.

[0048] The memory 208 may be a non-transient or non-transitory computer-readable medium storing computer-executable instructions 210 to perform all or a portion of one or more operations described in this disclosure. The device 200 may also include a power supply 220, which can be coupled to or integrated into the device 200.

[0049] The processor 206 may be one or more suitable processors capable of executing scripts or instructions of one or more software programs (such as the instructions 210) stored within the memory 208. In some aspects, the processor 206 may be one or more general-purpose processors that execute the instructions 210 to cause the device 200 to perform any number of different functions or operations. In additional or alternative aspects, the processor 206 may include integrated circuits or other hardware to perform functions or operations without the use of software. While shown to be coupled to each other via the processor 206 in the example of Figure 2A, the processor 206, the memory 208, the camera controller 212, the display 216, and the I/O components 218 may be coupled to one another in various arrangements. For example, the processor 206, the memory 208, the camera controller 212, the display 216, and/or the I/O components 218 may be coupled to each other via one or more local buses (not shown for simplicity).

[0050] The display 216 may be any suitable display or screen allowing for user interaction and/or allowing for presentation of items (such as captured images and video) for viewing by a user. In some aspects, the display 216 may be a touch-sensitive display.
The I/O components 218 may be or include any suitable mechanism, interface, or device to receive input (such as commands) from the user and to provide output to the user. For example, the I/O components 218 may include (but are not limited to) a graphical user interface, a keyboard, a mouse, a microphone and speakers, and so on.

[0051] The camera controller 212 may include an image signal processor 214, which may be one or more image signal processors to process captured image frames or video provided by the first camera 202 and/or the second camera 204. In some example implementations, the camera controller 212 (such as by using the image signal processor 214) may control operation of the first camera 202 and the second camera 204. In some aspects, the image signal processor 214 may execute instructions from a memory (such as the instructions 210 from the memory 208 or instructions stored in a separate memory coupled to the image signal processor 214) to control operation of the cameras 202 and 204 and/or to process and provide one or more image streams for recording or previewing. In other aspects, the image signal processor 214 may include specific hardware to control operation of the cameras 202 and 204 and/or to process and provide one or more image streams for recording or previewing. The image signal processor 214 may alternatively or additionally include a combination of specific hardware and the ability to execute software instructions.

[0052] While Figure 2A illustrates an example implementation of the device 200, the device 200 need not include all of the components shown in Figure 2A. Figure 2B illustrates another example implementation of the device 200, showing that the device 200 need not include all of the components illustrated in Figure 2A. For example, the device 200 may include the image signal processor 214 (which may be part of a camera controller).

[0053] Alternatively or additionally, the display 216 may be separate from and coupled to the device 200. In one example, the display 216 may be a wireless display coupled to the device performing the image processing. For example, a drone may provide captures to a user's controller or smartphone for preview, and the drone may include an image signal processor to fully process any captures intended for recording. The I/O components 218 may also be separate from the device 200. For example, if the input is a graphical user interface, some or all of the I/O components may be part of the display 216.

[0054] As shown, various configurations and types of devices can be used in implementing aspects of the present disclosure. As a result, the disclosure should not be limited to a specific device type or configuration.

[0055] Figure 3 is a block diagram of an example image signal processor 300. The image signal processor 300 may be an implementation of the image signal processor 214 of the camera controller 212 illustrated in Figure 2A. The image signal processor 300 may be a single-thread (or single-core) processor with a sequence of filters 302A-302N to process a first image stream. In an example implementation, filter 1 (302A) may be a noise reduction filter, filter 2 (302B) may be an edge enhancement filter, and filter N (302N) may be a final filter to complete processing of the image stream. Alternatively, the image signal processor 300 may be a multi-thread (or multi-core) processor with one or more additional filter chains 304A-304N to process other image streams.
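To make the filter-chain arrangement of Figure 3 concrete, the following is a minimal sketch of how such parallel chains might be organized in software. It is illustrative only: the filter names, the NumPy frame type, and the FilterChain class are assumptions made for this sketch, not the actual implementation of the image signal processor 300.

```python
# Illustrative sketch of the Figure 3 arrangement: each image stream is
# processed by its own ordered chain of per-frame filters (302A-302N for
# the first stream, 304A-304N for a second stream). Filter names and the
# NumPy-based frame type are assumptions, not the actual ISP design.
from typing import Callable, Iterable, Iterator, List
import numpy as np

Frame = np.ndarray
Filter = Callable[[Frame], Frame]

def noise_reduction(frame: Frame) -> Frame:
    return frame  # placeholder for filter 1 (302A)

def edge_enhancement(frame: Frame) -> Frame:
    return frame  # placeholder for filter 2 (302B)

def final_filter(frame: Frame) -> Frame:
    return frame  # placeholder for filter N (302N), completing processing

class FilterChain:
    """One processing line (thread or core) of the image signal processor."""

    def __init__(self, filters: List[Filter]) -> None:
        self.filters = filters

    def process(self, stream: Iterable[Frame]) -> Iterator[Frame]:
        # Apply each filter, in order, to every frame of the stream.
        for frame in stream:
            for apply_filter in self.filters:
                frame = apply_filter(frame)
            yield frame

# Independent chains let two streams be processed concurrently on a
# multi-thread (or multi-core) image signal processor.
first_chain = FilterChain([noise_reduction, edge_enhancement, final_filter])
second_chain = FilterChain([noise_reduction, edge_enhancement, final_filter])
```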
[0056] When a user uses a conventional device to record or capture a scene (such as a parent using a smartphone to record his or her child's soccer game or piano recital), the scene is typically previewed on a device display. When the user zooms in on a portion of the scene (such as zooming in on the child), the device previews the zoomed-in portion of the scene. If the zoom is a digital zoom, the device may preview only a portion of the scene captured by the device camera. In this way, a conventional camera controller may output only a portion of the image stream being captured for the preview. If a user loses track of an object to be recorded in a zoomed-in view and the preview does not show the object, the user may have difficulty positioning (such as moving (e.g., left, right, up, or down) or otherwise orienting) the camera so that the object again appears in the preview.

[0057] Figure 4A illustrates an example scene 402 being captured by a camera. For purposes of the discussion herein, the example scene 402 is captured or recorded by the first camera 202 or the second camera 204 (or both the first camera 202 and the second camera 204) of the device 200 of Figure 2A. The example scene 402 is of a player scoring a goal during practice. A user may wish to zoom in on the ball while recording the scene 402, and a preview of the zoomed-in portion of the scene being recorded may differ from what the user intends to record. Figure 4B depicts the example scene 402 of Figure 4A, an actual portion 404 of the zoomed-in scene being recorded, and an intended portion 406 of the scene to be recorded. If the device previews the portion 404 of the scene being or to be recorded, the user may have difficulty determining that the location of the intended portion 406 of the scene to be recorded is, in this example, to the right of and below the location of the portion 404 of the scene being or to be recorded. As a result, the user may not know in which direction to move or orient the camera. For example, the user may spend time moving the device to try to find the ball in the zoomed-in view, or may spend time zooming out and zooming back in to relocate the ball in the preview. As a result, a user may be too late in recording a desired image or video, or may otherwise miss opportunities to record an intended object at an intended time (such as recording his or her child scoring a goal).

[0058] In accordance with various aspects of the present disclosure, a display (such as the example display 216, which may be coupled to or included in a device) may be configured to assist a user in positioning one or more cameras. In some implementations, the display can preview the scene being captured by a camera (which includes the portion of the scene being recorded). In previewing the scene, the display (in communication with an image processing device) can present a visual indication of the portion of the scene being recorded. In this way, the user is able to readily determine the location of the portion of the scene being recorded relative to the scene being captured.

[0059] Figure 5A illustrates an example display 502 previewing the scene 402 of Figure 4A being captured and a visual indication 504 of the portion of the scene being or to be recorded. For simplicity and brevity, a portion of a scene being or to be recorded, as indicated by a visual indication on the display, may be referred to as the portion of the scene being recorded.
The display 502 can be the display 216 of the device 200 of Figures 2A and 2B, with the device 200 recording the portion of the scene indicated by the visual indication 504, but any suitable device configuration may be used. While the visual indication 504 is illustrated as a dashed rectangle indicating the portion of the scene being recorded, the visual indication 504 can take any form to indicate the portion of the scene being recorded. In some other examples, the visual indication 504 may indicate the portion of the scene as a grayed or otherwise contrasted area of the preview, may indicate the corners of the portion of the scene being recorded, may include a marker identifying the center of the portion of the scene being recorded, may be a solid-line box indicating the portion of the scene being recorded, and so on. In addition, recording a portion of the scene includes recording video as well as one or more still images. In this way, a user can position the visual indication 504 to be centered on a portion of the scene for which an image or video is being or to be recorded.

[0060] The capture stream from one camera can include the entire scene, and a spatial portion of that stream (for example, the spatial portion of the scene defined/bounded by the visual indication 504) is output (such as by the camera controller 212 of Figure 2A) for recording. Alternatively, a first camera's stream can include the entire scene while a second camera's stream includes the portion of the scene being recorded. Figure 5B illustrates an example of the first camera 202 of the device 200 capturing the scene 506 and the second camera 204 of the device 200 capturing the portion 508 of the scene 506 being or to be recorded. In this way, the display 216 can preview the scene 506 captured by the first camera 202 while the portion 508 captured by the second camera 204 is being or to be recorded.
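One way a device might compute where a visual indication such as 504 belongs in the preview is to map the second camera's narrower field of view into the first camera's frame. The sketch below assumes a simple pinhole model with both cameras sharing a centered optical axis; the function, the Indication type, and the field-of-view values are illustrative assumptions, not the patent's stated method.

```python
# Hypothetical helper: compute the normalized rectangle (x, y, width,
# height) that the second camera's narrower field of view occupies
# within the first camera's preview. Assumes aligned, centered optics.
from dataclasses import dataclass
import math

@dataclass
class Indication:
    x: float       # left edge, as a fraction of the preview width
    y: float       # top edge, as a fraction of the preview height
    width: float   # fraction of the preview width
    height: float  # fraction of the preview height

def indication_rect(wide_hfov: float, wide_vfov: float,
                    tele_hfov: float, tele_vfov: float) -> Indication:
    # Under a pinhole model, the fraction of the wide frame covered by
    # the narrow frame is the ratio of the tangents of the half-angles.
    w = math.tan(math.radians(tele_hfov / 2)) / math.tan(math.radians(wide_hfov / 2))
    h = math.tan(math.radians(tele_vfov / 2)) / math.tan(math.radians(wide_vfov / 2))
    return Indication(x=(1 - w) / 2, y=(1 - h) / 2, width=w, height=h)

# Example with assumed fields of view: a 77x63 degree wide camera and a
# 45x34 degree telephoto camera.
print(indication_rect(77.0, 63.0, 45.0, 34.0))
```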
In some additional examples, the first camera 202 may be a wide-angle viewing camera and the second camera 204 may be a narrower field of viewing camera with a longer focal length than the first camera 202, for example, for capture objects beyond the device. Some devices may include different configurations for a dual camera, and other devices may include more than two cameras. The implementations of the present invention apply to any camera configuration and device type. The examples provided using a two camera setup are provided for illustrative purposes only, and the exposure should not be limited to or by any of the examples provided. [0063] [0063] Figure 6 is an illustrative flow chart representing an exemplary operation 600 for emitting a visual indication. The visual indication issued can be presented on a display to assist a user in positioning a camera, so that the device can receive capture streams, including the intended scene being recorded. For example, a user can use the visual indication to identify how to position a camera in order to have its field of view that includes a desired scene. The exemplary operation 600 is described below with respect to two image streams (such as a first stream of the first camera 202 and a second image stream of the second camera 204 of the device 200). However, aspects apply to any number of image streams (including an image stream or three or more image streams). [0064] [0064] Although example operations (including example operation 600) are described with respect to device 200, camera controller 212, or image signal processor 214, any capable component or module can perform the operation. For example, even though a device 200 or a camera controller 212 can be described as performing an operation, an image signal processor 214 (as in Figure 2A or 2B), processor 206 (Figure), camera controller 212 ( Figure a) or another module or component can perform the operation. In addition, the term "output" may include the provision of a separate device, the provision of a remote display, the display on an included display of a device, the provision of a component from one device to another component of the device, or provision from a component of a device to a module external to the device. The description should not be limited to specific components or modules that perform operations as described in the example operations. [0065] [0065] Starting at 602, device 200 can process a first image stream associated with an entire scene. In one example, the first camera 202 can capture scene 506 (Figure 5B) and provide the first image stream to camera controller 212 of device 200, which processes the first incoming image stream. In another example, camera controller 212 of device 200 processes an image stream received from another device or stored in memory (such as memory 208). In some example implementations, the image signal processor 214 can process the first image stream. For example, the image signal processor 300 (Figure) can apply filters 302A A 302N to the first image stream. In addition or alternatively, the image signal processor 300 can perform other functions for the first image stream, such as cropping, zooming, deinterlacing and so on. [0066] [0066] The camera controller 212 of device 200 can also independently process a second image stream associated with a spatial portion (e.g., a portion less than or equal to an entire length) of the scene in the first image stream (604). 
In one example, the second camera 204 can capture the spatial portion 508 of scene 506 (Figure 5B) and supply the resulting second image stream to camera controller 212, which processes the incoming second image stream. In other examples, the second image stream may be provided by another device or from memory (such as memory 208). With reference to the previous example of the image signal processor 300 applying filters 302A-302N to the first image stream, the image signal processor 300 can apply filters 304A-304N to the second image stream, and can also or alternatively perform other functions such as cropping, zooming, deinterlacing and so on. [0067] [0067] In some exemplary implementations for processing the second image stream, device 200 (such as camera controller 212 or image signal processor 214) can determine the size and location of a visual cue in scene 506 of the first image stream (606). For example, device 200 can embed the visual cue in the first processed image stream, create a layer in the first processed image stream, enlarge the first processed image stream, and so on, so that the captured scene 506 and the indication visual can be displayed simultaneously (such as on a 216 display). [0068] [0068] It should be noted that this exposure should not be limited to the second camera 204 having a smaller field of view than the first camera 202. For example, the cameras may be at different zoom levels, so that a camera with a larger field of view than the other camera captures a smaller portion of the scene than the other camera. In addition, as noted earlier, the image signal processor 214 can be any number of processors, lines and / or cores. In some example implementations, the image signal processor 214 and processor 206 may be part of a System on Chip (SC). As such, it should be noted that several aspects of the present invention are not limited to any specific hardware. [0069] [0069] Proceeding to 608, device 200 can output the first image stream processed for preview. In some example implementations, camera controller 212 can generate the first processed image stream. Upon generating the first processed image stream, device 200 can predict scene 506 on display 216 (610). For example, device 200 can output the first processed image stream to the display 216, and the display 216 previews scene 506 by displaying the first processed image stream. During or after the generation and output of the first processed image stream (such as by camera controller 212), the visual indication of the spatial portion of the scene associated with the second image stream can also be output (612). The display 216 can optionally display the visual indication in scene 506 being previewed (614). For example, the first processed image stream may include the visual cue or otherwise be linked to the visual cue so that the camera controller 212 can associate the visual cue with the portion of the scene to be recorded in the first processed image stream . [0070] [0070] While the exemplary operation 600 shows the processing of the image streams in sequence and generation of the first processed image stream and the visual indication in sequence, the processing of different image streams can be performed concurrently and emitting or generating a stream of processed image and visual indication can also be performed simultaneously. The second processed image stream can be output while outputting the first processed image stream. 
For example, camera controller 212 can generate at least a portion of the second processed image stream associated with the portion of the scene being recorded. The processed image stream can be stored in a device's memory (such as 208 memory) or output to another device or memory. If the viewfinder 216 previews the first processed image stream and the visual indication, the visual indication indicates that the portion of the scene captured in the first processed image stream is recorded in at least a portion of the second image stream. [0071] [0071] While the exemplary operation 600 (and the other example operations in this description) describes the use of the second processed image stream for recording, only a portion of the second processed image stream can be used for recording. For example, a digital zoom can be used for the second camera 204, so that the entire scene captured by the second camera 204 should not be recorded. It should be noted that an image flow can refer to one or more temporal or spatial parts of the image flow or the current in its entirety, and the description should not be limited to a specific implementation of an image flow. [0072] [0072] With a visual indication shown in a preview of the scenario, the user can better understand how to move the device or cameras in order to locate an object of interest in the stream (or part of the stream) being or to be recorded. The preview of the scene with the visual indication can be called the "zoom aid" preview. In some example implementations, the user may want to switch between two or more of: (1) the zoom help preview in which the preview can both represent an indication of the part of a scene being recorded, as well as a preview of the entire scene being captured; (two) [0073] [0073] Figure 7 illustrates an exemplary switching operation 706 of the display 216 between the auxiliary approach preview 702 and the preview 704 of the part of the scene being recorded. In some example implementations, device 200 may allow a button, menu, gesture, or other user-based command to determine when to switch between previews 702 and 704. In some implementations, the command toggles between activation and activation. disable the zoom help preview feature. In some other implementations, the command changes the preview while one or more other previews are still generated, but not displayed or issued for display. [0074] [0074] In one example, viewfinder 216 may feature a zoom assist button 708 to allow the user to switch between previews 702 and 704, with preview 702 including a visual indication 710 indicating that the scene portion is or should be registered. In other examples, the user can use a pickup gesture, shake the device 200, or provide audible, visual or other touch commands to switch between previews 702 and 704. [0075] [0075] Additionally or alternatively, the decision to switch between previews 702 and 704 can be determined automatically (for example, "in flight") by device 200 based on one or more criteria. For example, one or more criteria for causing device 200 to switch from preview 704 to preview 702 may include camera movement, an object that moves to the captured part of the scene, movement of an object of interest (for example, an object of interest can be automatically tracked by device 200 for recording), and so on, which can cause device 200 to display the zoom assist preview 702 to assist a user in positioning the first camera 202 and / or second camera 204. 
Likewise, one or more criteria for causing device 200 to switch from preview 702 to preview 704 may include an evaluation (or analysis) of a certain amount of movement, determined depth and / or determined size, for example, an object of interest being tracked (for example, tracked based on user selection or predetermined criteria) in relation to various associated threshold levels. [0076] [0076] Figure 8 is an illustrative flow chart representing an exemplary operation 800 for switching between a scene being captured (such as the scene in a first image stream being captured by the first camera 202) and a portion of the scene being or to be recorded (such as at least a portion of the second image stream captured by the second camera 204). For example, camera controller 212 (operating alone or in collaboration with other components, such as processor 206) can determine whether the switch between the output (such as for preview) of the first processed image stream (and the visual indication) and the second image stream processed. The output image stream can be the image stream previewed by viewfinder 216. [0077] [0077] In some exemplary implementations, camera controller 212 can generate to record more than one image stream (such as the first image stream and the second image stream). In one example, camera controller 212 can generate and output to record the processed image stream not currently being previewed. A user can present an option to select which recorded image stream to maintain, such as being able to review each image stream and select the image stream being scanned to be maintained or deleted. Although the exemplary operation 800 shows the switching between an image stream being captured and an image stream being recorded, both image streams can be recorded, and exposure should not be limited to recording an image stream or recording as shown in the specific examples provided. [0078] [0078] Starting at 802, a first processed image stream can be generated for preview. For example, camera controller 212 (such as using the 214 image signal processor) can generate the first processed image stream. The 216 viewfinder can optionally display the zoom help preview, for example, showing the first processed image stream and visual indication (804). Referring again to Figure 7, viewfinder 216 can display the zoom help preview 702 with the visual indication [0079] [0079] If device 200 needs to switch from outputting the first processed image stream, device 200 switches to output the second processed image stream to the preview (808). In one example, camera controller 212 can switch to generate the second processed image stream. In another example, camera controller 212 generates both processed image streams, and device 200 determines the output output for viewfinder 216 from auxiliary zoom preview 702 to a preview 704 of the second stream of processed image. In this way, the display 216 can optionally present the second image stream processed (810), for example, by the preview of the user being or to be recorded. [0080] [0080] Device 200 (such as camera controller 212) can then determine from one or more criteria (such as a change in one or more criteria or a request or additional user input) if device 200 is to change of outputting the second processed image stream to the first processed image stream (812). For example, device 200 determines whether the viewfinder 216 should change to show scene part 704 being registered to the auxiliary approach preview 702 in Figure 7. 
If no switch is to occur, then device 200 continues to emit the second processed image stream (808). For example, viewfinder 216 continues to show scene part 704 being recorded (808). If a switch is to occur (812), device 200 switches to output the first processed image stream, for example, so that the preview zoom help can be shown on the display [0081] [0081] In some example implementations, viewfinder 216 can simultaneously display the zoom help preview and the portion of the scene being recorded. Figure 9A illustrates an exemplary view showing a simultaneous preview 902, including the zoom help preview 904 (with a visual indication 908) and the portion of scene 906 being recorded. Display 902, which may be an implementation of display 216, may additionally include a button 910 to switch between previews (such as between simultaneous display of previews 904 and 906 and a preview 906 of what is being or to be registered). While simultaneous preview 902, including previews 904 and 906, is shown side by side, previews 904 and 906 can be displayed simultaneously in any way, such as picture in picture, separate windows and so on. against. Additionally, while preview 906 is shown in letter form, only a portion of the auxiliary zoom preview 904 is shown, and each preview is shown in the middle of the display, all or any portion of each preview. - visualization 904 and 906 can be shown in any quantity of the display. For example, display 216 can display both previews 904 and 906 in a letter mode, can display only a portion of each of previews 904 and 906, allow the user to determine how previews 904 and 906 must be displayed simultaneously, or allow the user to change how much of the 902 display is to be used for each of the 904 and 906 previews. It should therefore be noted that the disclosure should not be limited to any of the sample viewers or previews provided. [0082] [0082] A user may want to switch (or device 200 may automatically switch by switching outputs of processed image streams) between a simultaneous preview display ("simultaneous preview") and the help preview of zoom. Figure 9B illustrates an exemplary 912 operation of a display switch between the simultaneous preview 902 and the zoom help preview 904. In one example, a user can press the zoom help button (ZA) to switch between 902 previews and [0083] [0083] In addition or alternatively, the display can switch between the simultaneous preview 902, the zoom help preview 904, and a 906 preview of what is or should be registered. For example, device 200 can switch between outputting the processed image streams for each of the previews. Figure 9C illustrates an example of a display switching between the auxiliary approach preview 904, the preview of the portion of the scene being or to be recorded 906 and / or the simultaneous preview 902, as indicated by the switching operations 914A - 914C. In one example, a user can press the ZA Button to cycle through the previews. In another example, the display may include multiple buttons (such as programmable buttons and / or physical buttons), so that a user can switch directly to a desired preview. In an additional example, the user can be left for one preview and capture right for the other preview (or capture in any other direction, such as up, down, or diagonally). Other criteria can additionally or alternatively be used for the device to determine to switch previews (such as camera movement, object movement and so on). 
[0084] [0084] Figure 10 is an illustrative flow chart representing an exemplary operation 1000 for switching between a concurrent preview and a preview of the scene being recorded. For example, camera controller 212 (or device 200) can switch from simultaneously transmitting the first processed image stream (and the visual indication) and the second processed image stream (such as to display the simultaneous preview on the 216 viewfinder) ) to preview only the second processed image stream (such as for display on the 216 display what is being recorded) for preview. In some alternative implementations, camera controller 212 (or device 200) can simultaneously output the first processed image stream and the second processed image stream for preview while switching between the simultaneous preview and a preview of what is being or to be recorded. The exemplary operation 1000 in Figure 10 is discussed with respect to Figure 9B, but similar processes or steps in the exemplary operation 1000 in Figure 10 can be used to switch between the previews illustrated in Figure 9C or other exemplary implementations. [0085] [0085] Starting at 1002, device 200 can simultaneously output the first processed image stream and the second processed image stream for a simultaneous preview on the display 216. For example, camera controller 212 (such as by using the image signal processor 214) can simultaneously output the first processed image stream and the second processed image stream. The display 216 can display the simultaneous preview 902, including the preview zoom help and a preview of what is being or being recorded (1004). The device 200 can then determine from one or more criteria (such as receiving a user command from the display 216 by the user, the zoom assist button or capturing the display, determining movement in the scene or in the camera movement, and so on) if device 200 (or camera controller 212) is able to switch from simultaneously transmitting the first and second processed image streams to output only the second processed image stream for preview (1006). For example, device 200 determines whether display 216 should switch (913) from simultaneous preview 902 to preview 904 of what is being recorded. If no switch is to occur, device 200 continues to simultaneously transmit the first and second processed image streams (1002). For example, display 216 continues to display simultaneous preview 902 (1004). [0086] [0086] If a switch is to occur (1006), device 200 switches to output the second processed image stream (1008). In one example, camera controller 212 (such as by using the image signal processor 214) can switch to output the second processed image stream. In another example, camera controller 212 outputs both processed image streams for preview, and device 200 determines the output of the first processed image stream to viewfinder 216 to present the simultaneous preview to output the second image stream processed for viewfinder 216 to be present. In this way, the display 216 can optionally display the second processed image stream (1010), the preview of what is being or to be recorded. [0087] [0087] Device 200 can then determine from one or more criteria (such as a change in one or more criteria or a request or additional user input) whether the switch outputs the second processed image stream for preview while simultaneously outputting the first and second image streams processed for preview (1012). 
For example, device 200 determines whether to display display 216 from displaying previews (913) from a preview 904 indicating that the scene portion is recorded in a concurrent preview 902 in figure 9A. If no switch is to occur, device 200 continues to output the second processed image stream (1008). For example, display 216 continues to display preview 904 indicating that the scene part is or should be recorded (1010). If a switch is to occur (1012), device 200 switches to simultaneously transmit the first and second processed image streams (1002). In this way, simultaneous preview 902 can be displayed by viewfinder 216. In some alternative implementations, device 200 (such as using camera controller 212) can simultaneously output both the first and second processed image streams, and the display 216 switches from displaying the preview of what is being recorded to displaying the simultaneous preview. [0088] [0088] In some exemplary implementations, the display 216 may allow the user to interact with the displayed preview (such as the user identification and / or the movement of the visual indication shown on the display 216). In some instances, if the display 216 is a touch sensitive display, the device 200 may receive from the display 216 an indication that a user has provided a pinching gesture to reduce the size of the visual indication and / or a spreading gesture. to enlarge the size of the visual indication. Additionally or alternatively, the user can provide a drag gesture to move the location of the visual indication. In some other examples, the user can provide commands via physical buttons (such as directional buttons, a zoom or scroll wheel, and so on) from a display 216 or device 200 or a microphone that receives audible commands. In some other examples, device 200 may include or be coupled to a gyroscope and / or an accelerometer that receives force commands. [0089] [0089] Figure 11A illustrates an example display 1102 that provides a zoom assist preview, in which a first visual indication 1104 is scaled to a second visual indication 1106 to indicate more than a portion of the scene being recorded. . The display 1102 can be an implementation of the display 216 or attached to the device 200. The scene portion 1108 corresponds to the first visual indication 1104, and the scene portion 1110 corresponds to the second visual indication 1106. In some instances, a user provides a gesture spreading for the first visual indication 1104 through the display 216 to switch from the first visual indication 1104 to the second visual indication 1106 and thus switching from scene part 1108 to scene part 1110 being or to be recorded. [0090] [0090] Figure 11B illustrates an example viewfinder 1112 that provides a zoom assist preview in which a first visual indication 1114 is moved to a second visual indication 1116 to indicate a different portion of the scene being recorded. Example display 1112 can be an implementation of display 216 or attached to device 200. Scenario portion 1118 corresponds to the first visual indication 1114, and scene portion 1120 corresponds to the second visual indication 1116. In some instances, a user provides a drag gesture to the first visual indication 1114 through the display 216 to switch from the first visual indication 1114 to the second visual indication 1116 (such as moving the visual indication) and thus switching from the scene part to the part of scene 1110 being or to be recorded. 
[0091] [0091] Figure 12 is an illustrative flow chart representing an exemplary operation 1200 to adjust a portion of the scene or to be registered based on a user input to adjust a visual indication. The exemplary operation 1200 of Figure 12 is discussed with respect to Figures 11A and 11B, but similar processes or steps in the exemplary operation 1200 of Figure 12 can be used to write what should be or be recorded for other exemplary implementations. For example, while a zoom help preview is described, the adjustment of the visual indication can occur in a preview or other previews. [0092] [0092] Starting at 1202, viewfinder 216 can display the zoom help preview (or alternatively, a simultaneous preview) including a visual indication of the part of the scene being recorded. While the display 216 shows the zoom help preview, the device 200 can receive a user input or request to adjust the visual indication (1204), such as from the display 216 providing user commands from an interaction with the display. For example, device 200 can optionally receive a user command to adjust the size of visual indication 1104 in Figure 11A (1206). Additionally or alternatively, device 200 may optionally receive a user command to move the location of the visual indication (1208). When requesting that the visual indication be adjusted (such as through interaction with the display 216), a user indicates that the device 200 is recording a different portion of the scene than currently indicated by the visual indication in the zoom help preview. [0093] [0093] In response to the user's command, the device 200 can adjust the processing of the second image stream (1210). By adjusting the processing of the second image stream (1210), the device 200 can optionally adjust the size of the part of the scene being recorded or recorded in the second image stream (1212). For example, when scene portion 1108 (Figure 11A) is issued for recording, device 200 adjusts the processing of the second image stream so that scene portion 1110 is issued for recording if the user changes the first visual indication 1104 for the second visual indication 1106. Additionally or alternatively, device 200 can optionally change the spatial location of the part of the scene being recorded (1214). For example, where scene portion 1118 (Figure 11B) is output for recording, device 200 adjusts the processing of the second image stream so that scene portion 1120 is output for recording if the user moves visual indication 1114 to the location of visual indication 1116 in the preview [0094] [0094] In some exemplary implementations, device 200 can adjust which part of the scene the second camera 204 captures, or send a command to the second camera 204 to adjust what is being captured, before applying one or more filters to the second captured image stream. In one example, camera controller 212 can send a command to second camera 204 to change the optical zoom level of second camera 204. In another example, camera controller 212 can send a command to second camera 204 to adjust the pan and / or the tilt of the second camera 204 (as for a security camera). [0095] [0095] In some other exemplary implementations, device 200 may harvest the second stream of captured image from the second camera 204 (or the second stream processed after the application of one or more filters). In this way, the device 200 can adjust the cut of the second image stream, so that only the portion of the scene associated with the adjusted visual indication is output for recording. 
[0094] In some example implementations, the device 200 may adjust which portion of the scene the second camera 204 captures, or may send a command to the second camera 204 to adjust what is being captured, before applying one or more filters to the second captured image stream. In one example, the camera controller 212 may send a command to the second camera 204 to change the optical zoom level of the second camera 204. In another example, the camera controller 212 may send a command to the second camera 204 to adjust the pan and/or tilt of the second camera 204 (such as for a security camera).

[0095] In some other example implementations, the device 200 may crop the second captured image stream from the second camera 204 (or the second processed stream after application of the one or more filters). In this way, the device 200 may adjust the cropping of the second image stream so that only the portion of the scene associated with the adjusted visual indication is output for recording. For example, the second camera 204 may capture more of the scene than is recorded (such that a digital zoom can be used for the second image stream to show what is to be or is being recorded). If the device 200 is to record a portion of the scene different from the portion currently being recorded in the second image stream, the second camera 204 may continue to capture the same scene, and the device 200 (such as the camera controller 212) may adjust the portion of the second image stream being recorded without adjusting the second camera 204.
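A sketch of how a device might choose between these two paths (Python; the rectangle convention, the single command_camera callback standing in for zoom and/or pan/tilt commands, and the numeric values are all assumptions for illustration):

```python
def contains(outer, inner):
    """True if rect `inner` lies entirely within rect `outer`;
    rects are (x, y, w, h) in normalized scene coordinates."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def union(a, b):
    """Smallest rect covering both `a` and `b`."""
    x0, y0 = min(a[0], b[0]), min(a[1], b[1])
    x1 = max(a[0] + a[2], b[0] + b[2])
    y1 = max(a[1] + a[3], b[1] + b[3])
    return (x0, y0, x1 - x0, y1 - y0)

def to_local_crop(region, captured):
    """Express `region` relative to the captured second stream."""
    return ((region[0] - captured[0]) / captured[2],
            (region[1] - captured[1]) / captured[3],
            region[2] / captured[2], region[3] / captured[3])

def adjust_second_stream(region, captured, command_camera):
    """Digital path ([0095]): if the second camera already covers the
    requested region, only the crop changes. Optical path ([0094]):
    otherwise command the camera to widen its coverage first."""
    if not contains(captured, region):
        captured = union(captured, region)
        command_camera(captured)  # stands in for zoom and/or pan/tilt commands
    return captured, to_local_crop(region, captured)

# The requested region extends past the right edge of the current capture:
capture, crop = adjust_second_stream(
    region=(0.55, 0.30, 0.30, 0.30),
    captured=(0.25, 0.25, 0.50, 0.50),
    command_camera=lambda fov: print("command camera, new coverage:", fov))
print(capture, crop)
```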
[0096] Referring again to Figure 12, the device 200 may adjust the generation of the displayed visual indication in response to the adjustment of the processing of the second image stream and/or in response to the user input to adjust the visual indication (1216). In some example implementations, the device 200 may optionally adjust the size of the visual indication to be displayed in the preview (1218). Additionally or alternatively, the device 200 may optionally move the visual indication to the new location to be displayed in the preview (1220).

[0097] In addition or as an alternative to a user requesting a change in the size and/or location of the visual indication (such as by user gestures on a touch-sensitive display 216 or other user inputs), the device 200 may automatically adjust the size and/or location of the visual indication (and thus the portion of the scene being or to be recorded) in response to one or more criteria. In one example, if the device 200 is tracking an object for recording (such as a face, a person, a soccer ball, and so on), and the camera is unable to focus on the object (such as when the object is blurred in the recording), the device 200 may record more of the scene including the object (enlarging the visual indication and thus "zooming out" from the object) in order to adjust the focus and make the object clear in the recording. For example, the device 200 may send a command to the camera to zoom out optically so that more of the scene is captured. In another example, if the scene brightness is reduced, the device 200 may command a camera to capture more of the scene for recording (which enlarges the visual indication) in an attempt to capture more ambient light. In a further example, if the device 200 is tracking a moving object, the device adjusts the portion of the scene being recorded (thus adjusting the location of the visual indication in the preview) in an attempt to follow the object.
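The disclosure leaves the trigger conditions unspecified. One plausible per-frame check combining the three examples above (Python; the focus score, luma threshold, and growth factor are invented tuning values, not taken from the disclosure):

```python
def auto_adjust(region, focus_score, mean_luma, track_center,
                focus_min=0.5, luma_min=40.0, grow=1.25):
    """Per-frame check of the three example criteria: enlarge the region
    ("zoom out") when the tracked object is blurred or the scene is dark,
    and recenter the region on a moving tracked object."""
    x, y, w, h = region
    if focus_score < focus_min or mean_luma < luma_min:
        w, h = min(w * grow, 1.0), min(h * grow, 1.0)  # capture more scene
    if track_center is not None:                       # follow the object
        x, y = track_center[0] - w / 2.0, track_center[1] - h / 2.0
    x = min(max(x, 0.0), 1.0 - w)                      # stay inside frame
    y = min(max(y, 0.0), 1.0 - h)
    return (x, y, w, h)

# A blurred tracked object that has drifted to the right of the scene:
print(auto_adjust((0.4, 0.4, 0.2, 0.2),
                  focus_score=0.3, mean_luma=80.0, track_center=(0.7, 0.5)))
```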
[0098] In some example implementations, the second camera 204 does not capture as much of the scene as the first camera 202, or the second camera 204 captures a different portion of the scene than the first camera 202. For example, some dual cameras have a first camera with a wide field of view and a second camera with a narrower field of view (such as a telephoto camera). In this way, the second camera may be unable to capture the entire scene in the first image stream. If a user of the device 200 attempts to adjust the visual indication (such as by moving or resizing the visual indication via the display 216) in a zoom assist preview toward an area of the scene that is not captured by the second camera 204, the device 200 may automatically switch to using the first image stream (provided by the first camera 202) when recording the new portion of the scene. The switch from using the second image stream to using the first image stream for recording may be seamless and invisible to the user. For example, the camera controller 212 may switch seamlessly from outputting at least a portion of the second processed image stream for recording to outputting at least a portion of the first processed image stream for recording. Alternatively, the device 200 may notify the user (such as with audible or visual notifications) that the stream used for recording has changed. In some other example implementations, both image streams may be recorded, and the device 200 may stitch portions of the two image streams together to generate a processed image stream corresponding to the portion of the scene indicated by the visual indication.

[0099] Figure 13A illustrates an example scene 1302 being captured by the first camera 202 and an example portion 1304 of the scene captured or capable of being captured by the second camera 204. As shown, the scene portion 1304 captured by the second camera 204 is smaller than the scene 1302 captured by the first camera 202. In the illustrated example, scene portion 1306 is originally being or to be recorded. If the portion of the scene to be recorded is adjusted to be scene portion 1308, the second camera 204 may be unable to capture the entire scene portion for recording. For example, a user may gesture on the display 216 to adjust the visual indication in a zoom assist preview to switch from recording scene portion 1306 to recording scene portion 1308, which is not captured in its entirety by the second camera 204.

[00100] Figure 13B illustrates the example scene 1302 of Figure 13A captured by the first camera 202 and the scene portion 1308 being or to be recorded, wherein the first image stream (provided by the first camera 202) includes the scene portion 1308 (corresponding to image 1310) that is being or to be recorded. In this way, the second image stream (provided by the second camera 204) might not be used for recording scene portion 1308.

[00101] Figure 14 is an illustrative flow chart depicting an example operation 1400 for switching between a first processed image stream and a second processed image stream for recording a portion of the scene. Beginning at 1402, the display 216 may display a preview that includes a visual indication (such as a zoom assist preview or a simultaneous preview). With the visual indication indicating the portion of the scene being or to be recorded displayed on the display 216, the device 200 (such as by using the camera controller 212) may output the second processed image stream for recording (1404). The device 200 may then receive a user command to adjust the visual indication (1406). For example, the device 200 may receive from the display 216 a user command provided by the user through an interaction with the display 216.

[00102] With the device 200 adjusting the portion of the scene being recorded, the device 200 may adjust the visual indication so that the visual indication corresponds to the adjusted portion of the scene being recorded (1412). In response to the user command, the device 200 may determine whether at least part of the portion of the scene to be recorded is out of view in the second image stream (1414). For example, the device 200 determines whether the scene portion 1304 (Figure 13A) captured by the second camera 204 does not include the entire scene portion 1308 to be recorded. If the second image stream includes the entire scene portion 1308 to be recorded, the device 200 continues to output the second processed image stream for recording (1404). For example, the camera controller 212 adjusts the portion of the second image stream that is for recording and continues to generate and output the second processed image stream for recording.
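The determination at 1414 is essentially a containment test between the portion to be recorded and the coverage of the second camera. A minimal sketch (Python; the rectangles are chosen to mimic the situation of Figures 13A and 13B and are otherwise arbitrary assumptions):

```python
def contains(outer, inner):
    """True if rect `inner` lies entirely within rect `outer`;
    rects are (x, y, w, h) in normalized scene coordinates."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def stream_for_recording(portion, second_capture):
    """Steps 1414-1416: keep using the second processed image stream while
    it covers the portion to record; otherwise fall back to the first
    (wide) processed image stream."""
    return "second" if contains(second_capture, portion) else "first"

# Portion 1306 lies inside capture 1304; adjusted portion 1308 does not.
capture_1304 = (0.25, 0.25, 0.50, 0.50)
print(stream_for_recording((0.35, 0.35, 0.25, 0.25), capture_1304))  # second
print(stream_for_recording((0.05, 0.05, 0.45, 0.45), capture_1304))  # first
```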
[00103] If the second image stream does not include the entire portion of the scene to be recorded (such as the second camera 204 not capturing the entire scene portion 1308 in Figure 13A), the device 200 may switch from outputting the second processed image stream for recording to outputting the first processed image stream for recording (1416). For example, the camera controller 212 may switch from outputting a portion of the second processed image stream to outputting a portion of the first processed image stream corresponding to the scene portion 1308 in Figure 13B. In some alternative implementations, the camera controller 212 may simultaneously output both the first processed image stream and the second processed image stream. In this way, the device 200 may determine to switch from using the second processed image stream for recording to using the first processed image stream for recording the portion of the scene.

[00104] In addition or as an alternative to a user requesting a change in the size and/or location of the visual indication (such as by user gestures on the display 216 or other user inputs), the device 200 may automatically switch from using the second processed image stream to using the first processed image stream for recording in response to one or more criteria of the scene, the device 200, and/or other components. In some example implementations, a change in brightness may cause the device 200 to switch image streams for recording. For example, a dual camera may include a color camera and a black-and-white camera, where the black-and-white camera has a higher fidelity in low-light situations than the color camera. In this way, the device 200 may determine to switch from using the color camera image stream to using the black-and-white camera image stream for recording if the scene brightness falls below a threshold.

[00105] In some other example implementations, one or more of the cameras (such as the first camera 202 and the second camera 204) may be moved so that the object being tracked falls outside the portion of the scene captured by one of the cameras. For example, if the first camera 202 has a wide field of view and the second camera 204 is a telephoto camera, a global movement in the first image stream and/or the second image stream may cause the object to fall outside the field of view of the second camera 204. As a result, the device 200 may switch image streams from the second camera 204 to the first camera 202 for recording in order to maintain tracking of the object. Similarly, the object being tracked may move (a local movement) such that the second image stream does not include the object. The device 200 may then switch from using the second image stream (which may be captured by a telephoto camera) to using the first image stream (which may be captured by a camera with a wider field of view).
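These criteria reduce to simple per-frame tests. The following sketch (Python; the luma threshold and the point-in-rectangle test for the tracked object are illustrative assumptions) anticipates the operation 1500 described next:

```python
def should_fall_back_to_first(mean_luma, object_center, second_fov,
                              luma_thresh=40.0):
    """Returns True when the scene is too dark for the stream currently
    used for recording, or when the tracked object (due to global or
    local movement) has left the second camera's field of view."""
    x, y, w, h = second_fov
    ox, oy = object_center
    in_fov = x <= ox <= x + w and y <= oy <= y + h
    return mean_luma < luma_thresh or not in_fov

# Brightness is fine, but the object has drifted out of the telephoto view:
print(should_fall_back_to_first(120.0, (0.90, 0.50), (0.25, 0.25, 0.50, 0.50)))
# -> True
```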
[00106] Figure 15 is an illustrative flow chart depicting an example operation 1500 for switching from a second processed image stream to a first processed image stream for recording based on one or more criteria. Beginning at 1502, the device 200 may output the second processed image stream for recording. The device 200 may then determine a change in one or more characteristics of the scene (1504). In one example, the device 200 may optionally determine a change in brightness or luminance in at least one of the image streams (1506). For example, the intensity of the ambient light may be determined to fall below a threshold for the processed image stream currently being used for recording. In another example, the device 200 may optionally determine a global movement in at least one of the image streams (1508). For example, the second camera 204 may be moved such that the first camera 202 should be used to capture the portion of the scene for recording. In a further example, the device 200 may optionally determine a local movement of an object (1510). For example, the object being tracked by the device 200 may move out of the field of view of the second camera 204, such that the first processed image stream should be used for recording.

[00107] Based on the change in the one or more characteristics of the scene, the device 200 switches from outputting the second processed image stream for recording to outputting the first processed image stream for recording (1512). In one example, the camera controller 212 may switch from outputting the second processed image stream for recording to outputting the first processed image stream for recording, where a portion of the first and/or second processed image stream may be stored for later use. In another example, the camera controller 212 may output both processed image streams, and the device 200 may determine to switch from using the output second processed image stream to using the output first processed image stream for recording.

[00108] In another example implementation of the device 200 automatically switching between processed image streams for recording, an unintended object may obstruct the portion of the scene being recorded. For example, when recording a child's soccer game, a person sitting in front of the user may suddenly enter the portion of the scene being recorded (such as by standing up, leaning over, and so on), so that the device 200 is not recording (or outputting for recording) the intended portion of the scene. If the object blocks or obstructs a small part of the portion of the scene being recorded, recording using the same image stream may continue with limited interruption. However, if the object blocks or obstructs a large part of the portion of the scene, the device 200 may determine to attempt to use a different image stream for recording. In some example implementations, the device 200 may switch to another processed image stream in an attempt to capture the intended portion of the scene (such as if another camera has a different location so that the portion of the scene is unobstructed by the object, or another camera has a wider field of view allowing a zoom out to capture the portion of the scene). For example, the device 200 may determine that a percentage of the portion of the scene is obstructed, may determine that an obstructing object is within a distance of the center of the portion of the scene being recorded, and so on.
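The example determinations above might look as follows in code (Python; both thresholds are hypothetical tuning values, and the obstruction fraction and center positions are assumed to come from an upstream detector that the disclosure does not specify):

```python
def obstruction_requires_switch(obstructed_fraction, obstruction_center,
                                portion_center, area_thresh=0.30,
                                center_thresh=0.20):
    """Switch streams when an object covers a large fraction of the
    recorded portion, or when it sits close to the portion's center."""
    dx = obstruction_center[0] - portion_center[0]
    dy = obstruction_center[1] - portion_center[1]
    near_center = (dx * dx + dy * dy) ** 0.5 < center_thresh
    return obstructed_fraction > area_thresh or near_center

# A spectator covering only 12% of the portion, but right at its center:
print(obstruction_requires_switch(0.12, (0.52, 0.48), (0.50, 0.50)))  # True
```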
[00109] Figure 16 is an illustrative flow chart depicting an example operation 1600 for switching from a second processed image stream to a first processed image stream for recording based on an object obstructing the portion of the scene being or to be recorded. Beginning at 1602, the device 200 may output the second processed image stream for recording. In one example, the camera controller 212 may output the second processed image stream for recording. In another example, the camera controller 212 may output multiple processed image streams, and the device 200 may use the second processed image stream for recording the portion of the scene.

[00110] With the second processed image stream being used for recording the portion of the scene, the device 200 may determine that an object has moved to obstruct the portion of the scene being or to be recorded. In response, the camera controller 212 may switch from outputting the second processed image stream for recording to outputting the first processed image stream for recording.

[00111] Alternatively, if the camera controller 212 outputs the first processed image stream and the second processed image stream, the device 200 may switch from using the second processed image stream to using the first processed image stream for recording the portion of the scene. In some example implementations, the device 200 may continue to track the obstructing object. If the obstructing object vacates the portion of the scene being or to be recorded, the device 200 may determine to switch the processed image streams used for recording. Alternatively, the device 200 may continue to use the current processed image stream for recording until another criterion causes the device to determine to switch the processed image streams used for recording.

[00112] The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof, unless specifically described as being implemented in a specific manner. Any features described as modules or components may also be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be performed at least in part by a non-transitory processor-readable storage medium (such as the memory 208 in the example device 200 of Figures 2A and 2B) comprising instructions 210 that, when executed by the processor 206 (or the image signal processor 214), cause the device 200 to perform one or more of the methods described above. The non-transitory processor-readable storage medium may form part of a computer program product, which may include packaging materials.

[00113] The non-transitory processor-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, other known storage media, and the like. The techniques may additionally, or alternatively, be performed at least in part by a processor-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer or other processor.

[00114] The various illustrative logic blocks, modules, circuits, and instructions described in connection with the embodiments disclosed herein may be executed by one or more processors, such as the processor 206 or the image signal processor 214 in the example device 200 of Figures 2A and 2B. Such processor(s) may include, but are not limited to, one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), application specific instruction set processors (ASIPs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. The term "processor", as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured as described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.

[00115] While the present disclosure shows illustrative aspects, it should be noted that various changes and modifications could be made herein without departing from the scope of the appended claims. Additionally, the functions, steps, or actions of the method claims in accordance with the aspects described herein need not be performed in any particular order unless expressly stated otherwise. For example, the steps of the example operations illustrated in Figures 6, 8, 10, 12, and 14-16, if performed by the device, camera controller, processor, or image signal processor, may be performed in any order and at any frequency (such as for each frame, at a periodic frame interval in a processed image stream, at a predefined time period, when user gestures are received, and so on). Furthermore, although elements may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. For example, although two processed image streams are described, one processed image stream or three or more processed image streams may be used in performing aspects of the present disclosure. Accordingly, the disclosure is not limited to the illustrated examples, and any means for performing the functionality described herein are included in aspects of the disclosure.
Claims (30)

1. A device, comprising: a memory configured to store image data; and a processor in communication with the memory, the processor being configured to: process a first image stream associated with a scene; independently process a second image stream associated with a spatial portion of the scene, wherein the second image stream is different from the first image stream; output the first processed image stream; and output, during output of the first processed image stream, a visual indication that indicates the spatial portion associated with the second image stream.

2. The device of claim 1, wherein the processor is further configured to: output the second processed image stream for simultaneous display of the second processed image stream and the first processed image stream.

3. The device of claim 1, wherein the processor is further configured to: output the second processed image stream, wherein the device is configured to switch between displaying the first processed image stream and displaying the second image stream based on one or more criteria.

4. The device of claim 1, wherein the processor is further configured to: receive a request to adjust the visual indication being displayed; adjust the visual indication being displayed based on the received request; generate a control signal to adjust the spatial portion of the scene in the second image stream based on the received request; and process an adjusted second image stream for the adjusted spatial portion of the scene in response to the control signal.

5. The device of claim 4, further comprising: a first camera configured to capture the first image stream; and a second camera configured to capture the second image stream and to receive the control signal, wherein the control signal is to adjust a zoom level of the second camera.

6. The device of claim 4, further comprising a touch-sensitive display configured to: display the first processed image stream; receive a user command to adjust the visual indication being displayed during display of the first processed image stream; and output the request to adjust the visual indication in response to receiving the user command to adjust the visual indication.

7. The device of claim 4, wherein the processor is further configured to: output for recording one of: the first processed image stream for a spatial portion of the scene corresponding to the adjusted visual indication; or the adjusted second processed image stream; and determine the output for recording based on at least one of a size or a location of an area within the scene corresponding to the adjusted visual indication.

8. The device of claim 1, wherein the processor is further configured to: output for recording one of: the first processed image stream for a spatial portion of the scene corresponding to the visual indication; or the second processed image stream; and determine the output for recording based on at least one of: one or more characteristics of the scene; or movement of an object in the spatial portion of the scene corresponding to the visual indication.

9.
A method, comprising: processing, by a processor, a first image stream associated with a scene; independently processing, by the processor, a second image stream associated with a spatial portion of the scene, wherein the second image stream is different from the first image stream; outputting the first processed image stream; and outputting, during output of the first processed image stream, a visual indication that indicates the spatial portion associated with the second image stream.

10. The method of claim 9, further comprising: outputting the second processed image stream for simultaneous display of the second processed image stream and the first processed image stream.

11. The method of claim 9, further comprising: outputting the second processed image stream, wherein a display coupled to the processor is configured to switch between displaying the first processed image stream and displaying the second image stream based on one or more criteria.

12. The method of claim 9, further comprising: receiving a request to adjust the visual indication being displayed; adjusting the visual indication being displayed based on the received request; generating a control signal to adjust the spatial portion of the scene in the second image stream based on the received request; and processing an adjusted second image stream for the adjusted spatial portion of the scene in response to the control signal.

13. The method of claim 12, further comprising: receiving the first image stream from a first camera configured to capture the first image stream; and receiving the second image stream from a second camera configured to capture the second image stream; wherein outputting the control signal comprises outputting the control signal to the second camera, wherein the control signal is to adjust a zoom level of the second camera.

14. The method of claim 12, further comprising: displaying the first processed image stream on a touch-sensitive display, comprising displaying the visual indication; receiving, via the touch-sensitive display, a user command to adjust the visual indication; and displaying an adjusted visual indication in response to receiving the user command to adjust the visual indication.

15. The method of claim 12, further comprising: outputting for recording one of: the first processed image stream for a spatial portion of the scene corresponding to the adjusted visual indication; or the adjusted second processed image stream; and determining the output for recording based on at least one of a size or a location of an area within the scene corresponding to the adjusted visual indication.

16. The method of claim 9, further comprising: outputting for recording one of: the first processed image stream for a spatial portion of the scene corresponding to the visual indication; or the second processed image stream; and determining the output for recording based on at least one of: one or more characteristics of the scene; or movement of an object in the spatial portion of the scene corresponding to the visual indication.

17.
A non-transitory computer-readable medium storing one or more programs containing instructions that, when executed by one or more processors of a device, cause the device to perform operations comprising: processing, by a processor, a first image stream associated with a scene; independently processing, by the processor, a second image stream associated with a spatial portion of the scene, wherein the second image stream is different from the first image stream; outputting the first processed image stream; and outputting, during output of the first processed image stream, a visual indication that indicates the spatial portion associated with the second image stream.

18. The non-transitory computer-readable medium of claim 17, wherein execution of the instructions causes the device to perform operations further comprising: outputting the second processed image stream for simultaneous display of the second processed image stream and the first processed image stream.

19. The non-transitory computer-readable medium of claim 17, wherein execution of the instructions causes the device to perform operations further comprising: outputting the second processed image stream, wherein the device is configured to switch between displaying the first processed image stream and displaying the second image stream based on one or more criteria.

20. The non-transitory computer-readable medium of claim 17, wherein execution of the instructions causes the device to perform operations further comprising: receiving a request to adjust the visual indication being displayed; adjusting the visual indication being displayed based on the received request; generating a control signal to adjust the spatial portion of the scene in the second image stream based on the received request; and processing an adjusted second image stream for the adjusted spatial portion of the scene in response to the control signal.

21. The non-transitory computer-readable medium of claim 20, wherein execution of the instructions causes the device to perform operations further comprising: receiving the first image stream from a first camera configured to capture the first image stream; and receiving the second image stream from a second camera configured to capture the second image stream; wherein outputting the control signal comprises sending the control signal to the second camera, wherein the control signal is to adjust a zoom level of the second camera.

22. The non-transitory computer-readable medium of claim 20, wherein execution of the instructions causes the device to perform operations further comprising: displaying the first processed image stream on a touch-sensitive display, comprising displaying the visual indication; receiving, via the touch-sensitive display, a user command to adjust the visual indication; and displaying an adjusted visual indication in response to receiving the user command to adjust the visual indication.

23. The non-transitory computer-readable medium of claim 20, wherein execution of the instructions causes the device to perform operations further comprising: outputting for recording one of: the first processed image stream for a spatial portion of the scene corresponding to the adjusted visual indication; or the adjusted second processed image stream; and determining the output for recording based on at least one of a size or a location of an area within the scene corresponding to the adjusted visual indication.
24. The non-transitory computer-readable medium of claim 17, wherein execution of the instructions causes the device to perform operations further comprising: outputting for recording one of: the first processed image stream for a spatial portion of the scene corresponding to the visual indication; or the second processed image stream; and determining the output for recording based on at least one of: one or more characteristics of the scene; or movement of an object in the spatial portion of the scene corresponding to the visual indication.

25. A device, comprising: means for processing a first image stream associated with a scene; means for independently processing a second image stream associated with a spatial portion of the scene, wherein the second image stream is different from the first image stream; means for outputting the first processed image stream; and means for outputting, during output of the first processed image stream, a visual indication that indicates the spatial portion associated with the second image stream.

26. The device of claim 25, further comprising: means for outputting the second processed image stream for simultaneous display of the second processed image stream and the first processed image stream.

27. The device of claim 25, further comprising: means for outputting the second processed image stream, wherein the device is configured to switch between displaying the first processed image stream and displaying the second image stream based on one or more criteria.

28. The device of claim 25, further comprising: means for receiving a request to adjust the visual indication being displayed; means for adjusting the visual indication being displayed based on the received request; means for generating a control signal to adjust the spatial portion of the scene in the second image stream based on the received request; and means for processing an adjusted second image stream for the adjusted spatial portion of the scene in response to the control signal.

29. The device of claim 28, further comprising: means for receiving the first image stream from a first camera configured to capture the first image stream; and means for receiving the second image stream from a second camera configured to capture the second image stream; wherein outputting the control signal comprises sending the control signal to the second camera, wherein the control signal is to adjust a zoom level of the second camera.

30. The device of claim 28, further comprising: means for displaying the first processed image stream on a touch-sensitive display, comprising displaying the visual indication; means for receiving, via the touch-sensitive display, a user command to adjust the visual indication; and means for displaying an adjusted visual indication in response to receiving the user command to adjust the visual indication.